It's been a while since I've posted anything on my blog. The truth is that Freexian has been doing very well in the last few years and I have a hard time allocating time to write articles or even to contribute to my usual Debian projects, the exception being debusine, since that is part of the Freexian work (have a look at our most recent announcement!).
That being said, given Freexian's growth and in the hope of reducing my workload, we are looking to extend our team with Debian members of more varied backgrounds and skills, so they can help us in areas like sales, marketing, and project management. Have a look at our announcement on debian-jobs@lists.debian.org.
As a mission-oriented company, we are looking to work with people already involved in Debian (or people who were waiting for the right opportunity to get involved). All our collaborators can spend 20% of their paid work time on the Debian projects they care about.
Note: The currency used in this post is the Indian Rupee; the exchange rate was around 83 INR to 1 US Dollar at that time.
My friend Badri and I visited the Taj Mahal this month. The Taj Mahal is one of the main tourist destinations in India and does not need an introduction, I guess. It is in Agra, in the state of Uttar Pradesh, 188 km from Delhi by train. So, I am writing a post documenting useful information for people who are planning to visit the Taj Mahal. Feel free to ask me questions about visiting the Taj Mahal.
We had booked a train from Delhi to Agra. The train was the Taj Express; its scheduled departure from Hazrat Nizamuddin station in Delhi is 07:08 in the morning, and its arrival at Agra Cantt station is 09:45. So, we booked a retiring room at the Old Delhi railway station for the previous night. This retiring room was hard to find. We woke up at 05:00 in the morning and took the metro to Hazrat Nizamuddin station. We barely reached the station in time, but the train was not yet at the station anyway; it was running late.
We reached Agra at 10:30, checked into our retiring room, took some rest, and left for the Taj Mahal at 13:00. The Taj Mahal's outer gate is 5 km from Agra Cantt station. As we were leaving the railway station, we were chased by an autorickshaw driver who offered to take us to the Taj Mahal for 150 INR for both of us. I asked him to bring it down to 60 INR, and after some back and forth, he agreed to drop us off for 80 INR. But I said we wouldn't pay anything above 60 INR. He agreed to that amount, but said that he would need to fill up with more passengers. When we saw that he wasn't making any effort to bring in more passengers, we walked away.
As soon as we got out of the railway station complex, another autorickshaw driver came to us and offered to drop us off at the Taj Mahal for 20 INR each if we shared with other passengers, or 100 INR if we reserved the auto for ourselves. We agreed to the 20 INR per person option, but he started the autorickshaw as soon as we hopped in. I thought that the third person in the auto was another passenger sharing the ride with us, but we later learned he was with the driver. Upon reaching the outer gate of the Taj Mahal, I gave him 40 INR (for both of us), and he asked for 100 INR instead, claiming we had reserved the auto, even though I had clearly stated before taking the auto that we wanted to share it, not reserve it. I think this was a scam. We walked away, and he didn't insist further.
The Taj Mahal entrance is about 500 m from the outer gate. We went there and bought offline tickets just outside the West gate. For Indians, the ticket for entering the Taj Mahal complex is 50 INR, and a visit to the mausoleum costs 200 INR extra.
We came out of the Taj Mahal complex at 18:00 and stopped for some tea and snacks. I also bought a fridge magnet for 30 INR. Then we walked back towards Agra Cantt station, as we had a train to Jaipur at midnight. We were hoping to find a restaurant along the way, but we didn't find any that looked interesting, so we just ate at the railway station. During the return trip, we noticed a bus stand near the station, which we hadn't known about. It turns out you can catch a bus to the Taj Mahal from there; the bus stand's location is marked on OpenStreetMap.
Expenses
These were our expenses per person:
Retiring room at Delhi railway station (12 hours): 131 INR
Train ticket from Delhi to Agra (Taj Express): 110 INR
Retiring room at Agra Cantt station (12 hours): 450 INR
Autorickshaw to the Taj Mahal: 20 INR
Taj Mahal ticket (including the mausoleum): 250 INR
Food: 350 INR
Important information for visitors
The Taj Mahal is closed on Fridays.
There are plenty of free-of-cost drinking water taps inside the Taj Mahal complex.
The ticket price for Indians is 50 INR; for foreigners and NRIs it is 1100 INR, and for visitors from SAARC/BIMSTEC countries it is 540 INR. The mausoleum costs 200 INR extra for everyone.
A visit inside the mausoleum requires covering your shoes or removing them. Shoe covers cost 10 INR per person inside the complex, but are probably included free of charge with foreigner tickets. We could not find a place to keep our shoes, but some people managed to enter barefoot, so there must be somewhere to leave them.
Mobile phones and cameras are allowed inside the Taj Mahal, but food items are not.
We went there on March 10th, and the weather was pleasant. So, we recommend going around that time.
Regarding the timings, I found this written near the ticket counter: "Taj Mahal opens 30 minutes before sunrise and closes 30 minutes before sunset during normal operating days", so the timings are vague. But we came out of the complex at 18:00. I would interpret that to mean the Taj Mahal is open from 07:00 to 18:00, with the ticket counter closing at around 17:00. During the winter, the timings might differ.
The cheapest way to reach the Taj Mahal is by bus, from the bus stand near the station mentioned above.
syncoid to TrueNAS

In my homelab, I have two NAS systems:
- Linux (Debian)
- TrueNAS Core (based on FreeBSD)

On my Linux box, I use Jim Salter's sanoid to periodically take snapshots of my ZFS pool. I also want a proper backup of the whole pool, so I use syncoid to transfer those snapshots to another machine. Sanoid itself is responsible only for taking new snapshots and pruning old ones you no longer care about.
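As a concrete illustration, the replication step can be a single syncoid invocation run from cron; the pool, dataset, host, and user names below are placeholders, not my actual setup:

```shell
# Replicate a dataset (and all children) to the TrueNAS box over SSH.
# --no-sync-snap sends only the snapshots sanoid already created,
# rather than making an extra snapshot just for the transfer.
syncoid --recursive --no-sync-snap \
    tank/data backupuser@truenas.example:backup/data
```

The retention policy itself lives in sanoid.conf on each machine; syncoid only moves the snapshots.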
A few days ago CISPE, a trade association of European cloud providers, published a press release complaining about the new VMware licensing scheme and asking for regulators and legislators to intervene.
But VMware does not have a monopoly on virtualization software: I think that asking regulators to interfere is unnecessary and unwise, unless, of course, they wish to question the entire foundations of copyright. Which, on the other hand, could be an intriguing position that I would support...
I believe that over-reliance on a single supplier is a typical enterprise risk: in the past decade some companies have invested in developing their own virtualization infrastructure using free software, while others have decided to rely entirely on a single proprietary software vendor.
My only big concern is that many public sector organizations will continue to use VMware and pay the huge fees designed by Broadcom to extract the maximum amount of money from their customers. However, it is ultimately the citizens who pay these bills, and blaming the evil US corporation is a great way to avoid taking responsibility for these choices.
"Several CISPE members have stated that without the ability to license and use VMware products they will quickly go bankrupt and out of business."
The twenty-first release of littler as a CRAN package landed on CRAN just now, following in the now eighteen-year history (!!) as a package started by Jeff in 2006, and joined by me a few weeks later.
littler is the first command-line interface for R, as it predates Rscript. It allows for piping as well as for shebang scripting via #!, uses command-line arguments more consistently, and still starts faster. It also has always loaded the methods package, which Rscript only began to do in recent years.
littler lives on Linux and Unix, has its difficulties on macOS due to yet-another-braindeadedness there (who ever thought case-insensitive filesystems as a default were a good idea?), and simply does not exist on Windows (yet the build system could be extended; see RInside for an existence proof, and volunteers are welcome!). See the FAQ vignette on how to add it to your PATH. A few examples are highlighted at the GitHub repo, as well as in the examples vignette.
This release contains another fair number of small changes and improvements to some of the scripts I use daily to build or test packages, adds a new front-end ciw.r for the recently-released ciw package offering a "CRAN Incoming Watcher", a new helper installDeps2.r (extending installDeps.r), a new DOI-to-bib converter, allows a different temporary directory setup I find helpful, deals with one corner deployment use, and more.
The full change description follows.
Changes in littler
version 0.3.20 (2024-03-23)
Changes in examples scripts
- New (dependency-free) helper installDeps2.r to install dependencies
- Scripts rcc.r, tt.r, tttf.r, tttlr.r use env argument -S to set -t to r
- tt.r can now fill in inst/tinytest if it is present
- New script ciw.r wrapping the new package ciw
- tttf.r can now use devtools and its load_all
- New script doi2bib.r to call the DOI converter REST service (following a skeet by Richard McElreath)
Changes in package
- The CI setup uses checkout@v4 and the r-ci-setup action
- The Suggests: is a little tighter as we do not list all packages optionally used in the examples (as R does not check for it either)
- The package load message can account for the rare build of R under a different architecture (Berwin Turlach in #117 closing #116)
- In non-vanilla mode, the temporary directory initialization is re-run, allowing for a non-standard temp dir via config settings
My CRANberries service provides a comparison to the previous release. Full details for the littler release are provided as usual at the ChangeLog page, and also on the package docs website. The code is available via the GitHub repo, from tarballs, and now of course also from its CRAN page and via install.packages("littler"). Binary packages are available directly in Debian, as well as (in a day or two) Ubuntu binaries at CRAN thanks to the tireless Michael Rutter.

Comments and suggestions are welcome at the GitHub repo.

If you like this or other open-source work I do, you can sponsor me at GitHub.
This post is the second and final part of my Malaysia-Thailand trip. Feel free to check out the Malaysia part here if you haven't already. Kuala Lumpur to Bangkok is around 1500 km by road, so I took a Malaysia Airlines flight to Bangkok. The flight staff at Kuala Lumpur only asked me for a return/onward flight, and Thai immigration asked a few questions but did not check any documents (obviously they checked and stamped my passport ;)). The currency of Thailand is the Thai baht, and 1 Thai baht = 2.5 Indian Rupees. Thailand time is 1.5 hours ahead of Indian time (for example, if it is 12 noon in India, it is 13:30 in Thailand).
I landed in Bangkok at around 3 PM local time. Fletcher was in Bangkok at that time, about to leave for Pattaya, and we had booked the same hostel. So I took a bus to Pattaya from the airport. The next bus with available tickets was at 7 PM, so I bought tickets for that one. The bus ticket cost 143 Thai baht. I didn't buy a SIM at the airport, thinking there must be better deals in the city. As a consequence, there was no way to contact Fletcher over the internet, although I had a few minutes of calling remaining in my international roaming pack.
Our accommodation was near Jomtien beach, so I got off at the last stop, as the bus terminates at Jomtien beach. Then I decided to walk towards my accommodation. I was using OsmAnd for navigation. However, the place was not marked on OpenStreetMap, and it turned out I missed the street my hostel was on and walked around 1 km further, chasing a similarly named but incorrect hostel on OpenStreetMap. Then I asked for help from two men sitting at a café. One of them said he would help me find the street my hostel was on. So I walked with him, and he told me he had lived in Thailand for many years, but was from Kuwait. He also gave me valuable information. For instance, he told me about the shared hail-and-ride songthaews which run along Jomtien Second Road and charge 10 baht for any distance on their route. This tip significantly reduced our expenses. Further, he suggested 7-Eleven shops for buying a local SIM. Like Malaysia, Thailand has 24/7 7-Eleven convenience stores, many of them not even 100 m apart.
The Kuwaiti man dropped me at the address of my hostel. I tried to find a person in charge of the hostel, and soon realized there was no reception. After asking locals for help for some time, I bumped into Fletcher, who had also come to this address and was searching for the same thing. Having found a friend, I breathed a sigh of relief. Adjacent to the property was a hairdresser's shop. We went there and asked about the property. The woman called the owner, who told us the passcodes needed to get inside. Our accommodation was a room on the second floor, which required entering a passcode to open. We entered the passcode and went into the room. So, we stayed at this hostel with no reception, and because of this it took 2 hours to find our room and get in. It reminded me of a difficult experience I had in Albania, where Akshat and I were unable to find our apartment on one of the hottest days, and the owner didn't speak our language.
Traveling from the bus drop-off point to the hostel, I saw streets filled with bars and massage parlors, which was expected. Prostitutes were everywhere. We went out at night towards the beach and also roamed around 7-Elevens to buy a SIM card for myself. I got a SIM with 7 days of unlimited internet for 399 baht. It turns out the rates for SIM cards at the airport were not so different from those inside the city.
As for English, locals didn't know it at all in either Pattaya or Bangkok. I normally don't expect locals to know English in a non-English-speaking country, but the fact that Bangkok is one of the most visited tourist destinations made me expect locals to know some English. Talking to locals is an integral part of travel for me, which I couldn't do much of in Thailand. This aspect is much more important to me than going to touristy places.
So, we were in Pattaya. The next morning, Fletcher and I went to the Tiger Park using a shared songthaew. After that, we planned to visit the Pattaya Floating Market, which is near the Tiger Park, but we felt the ticket prices were higher than it was worth. Fletcher had to leave for Bangkok that day. I suggested he go to Suvarnabhumi Airport from the Jomtien beach bus terminal (the route I took in the opposite direction on my last day) to avoid traffic congestion inside Bangkok, as he could continue by metro once he reached the airport. From the floating market, we walked in sweltering heat to reach Jomtien beach. I tried asking for a lift and eventually succeeded when a scooty stopped; surprisingly, the rider gave a lift to both of us. He was from Delhi, so maybe that's why he stopped for us. Then we took a songthaew to the bus terminal, and after having lunch, Fletcher left for Bangkok.
The next day I went to Bangkok, but Fletcher had already left for Kuala Lumpur. Here I had booked a private room in a hotel (instead of a hostel) for four nights, mainly because of my luggage. This cost 5600 INR for four nights. It was 2 km from the metro station, and I walked both ways. In Bangkok, I visited Sukhumvit and Siam by metro. Going to some areas requires crossing the Chao Phraya river; for this, I took the Chao Phraya Express Boat to places like Khao San Road and Wat Arun. I would recommend taking the boat ride, as it has very good views. In Bangkok, I met a person from Pakistan staying in my hotel, so here too I had some company. But by the time I met him, my days were almost over. So, we went to a random restaurant selling Indian food, where we ate a paneer dish with naan; the restaurant owner was from Myanmar.
For eating, I mainly relied on fruits and convenience stores. The bananas were very tasty; this was the first time I saw banana flesh that was yellow. The mangoes were delicious, and the pineapples were smaller and flavorful. I also ate rose apple, which I had never had before. I had chhole kulche once in Sukhumvit; that was a little expensive, as it cost 164 baht. I also used to buy premix coffee packets from 7-Eleven convenience stores and prepare them inside the stores.
My booking from Bangkok to Delhi was on an Air India flight, and they were serving alcohol on the flight. I chose red wine, and this was my first time having alcohol on a flight.
Notes
In this whole trip spanning two weeks, I did not pay for drinking water (except once in Pattaya, which was 9 baht) or for toilets. Bangkok and Kuala Lumpur have plenty of malls where you should find a free-of-cost toilet nearby. For drinking water, I relied mainly on my accommodation providing refillable water for my bottle.
Thailand seemed more expensive than Malaysia on average. Malaysia had discounted prices due to Chinese New Year.
I liked Pattaya more than Bangkok, maybe because Pattaya has a beach and Bangkok doesn't. Pattaya seemed livelier, and I could meet and talk to a few people there, as opposed to Bangkok.
A one-day pass for the Chao Phraya Express Boat costs 150 baht and lets you hop on and off any boat.
Last week we held our promised miniDebConf in Santa Fe City, Santa Fe province, Argentina, just across the river from Paraná, where I have spent almost six beautiful months I will never forget.

Around 500 kilometers north of Buenos Aires, Santa Fe and Paraná are separated by the beautiful and majestic Paraná river, which flows from Brazil, marks the eastern border of Paraguay, and continues within Argentina as the heart of the litoral region of the country, until it merges with the Uruguay river (you guessed right: the river marking the eastern border of Argentina, first with Brazil and then with Uruguay), and they become the Río de la Plata.
This was a short miniDebConf: we were lent the APUL union's building for the weekend (thank you very much!). On Saturday we had a cycle of talks, and on Sunday we had more of a hacklab dynamic, with some unstructured time to work each on our own projects, and to talk and have a good time together.
We were five Debian people attending: santiago, debacle, eamanu, dererk, gwolf (@debian.org). My main contact for kickstarting the organization was Martín Bayo. Martín was for many years the leader of the Technical Degree on Free Software at Universidad Nacional del Litoral, where I was also a teacher for several years. Together with Leo Martínez, also a teacher at the tecnicatura, they put us in contact with Guillermo and Gabriela, from APUL, the non-teaching-staff union of said university.
We had the following set of talks (for which there is a promise to get electronic recordings, as APUL was kind enough to record them! Of course, I will push them to our usual conference video archiving service as soon as I get them):
10:00-10:25 - Introducción al Software Libre (Introduction to Free Software) - Martín Bayo
10:30-10:55 - Debian y su comunidad (Debian and its community) - Emanuel Arias
11:00-11:25 - ¿Por qué sigo contribuyendo a Debian después de 20 años? (Why am I still contributing to Debian after 20 years?) - Santiago Ruano
11:30-11:55 - Mi identidad y el proyecto Debian: ¿Qué es el llavero OpenPGP y por qué? (My identity and the Debian project: What is the OpenPGP keyring and why?) - Gunnar Wolf
12:00-13:00 - Explorando las masculinidades en el contexto del Software Libre (Exploring masculinities in the context of Free Software) - Gora Ortiz Fuentes and José Francisco Ferro
13:00-14:30 - Lunch
14:30-14:55 - Debian para el día a día (Debian for everyday use) - Leonardo Martínez
15:00-15:25 - Debian en las Raspberry Pi (Debian on the Raspberry Pi) - Gunnar Wolf
15:30-15:55 - Device Trees - Lisandro Damián Nicanor Pérez Meyer (videoconference)
16:00-16:25 - Python en Debian (Python in Debian) - Emmanuel Arias
16:30-16:55 - Debian y XMPP en la medición de viento para la energía eólica (Debian and XMPP in wind measurement for wind energy) - Martin Borgert
As always happens, DebConf, miniDebConf, and other Debian-related activities are fun and productive, and a great opportunity to meet our decades-long friends again. Let's see what comes next!
Utkarsh Gupta
did 11.25h (out of 26.75h assigned and 33.25h from the previous period), thus carrying over 48.75h to the next month.
Evolution of the situation
In February, we have released 17 DLAs.
The number of DLAs published during February was a bit lower than usual, as much work went into triaging CVEs (a number of which turned out not to affect Debian buster, while others ended up being duplicates or were otherwise determined to be invalid). Of the packages which did receive updates, notable ones were sudo (to fix a privilege management issue), and iwd and wpa (both of which suffered from authentication bypass vulnerabilities).
While this has already been announced on the Freexian blog, we would like to mention here the start of the Long Term Support project for Samba 4.17. You can find all the important details in that post, but we would like to highlight that it is thanks to our LTS sponsors that we are able to fund the work from our partner, Catalyst, towards improving the security support of Samba in Debian 12 (Bookworm).
Thanks to our sponsors
Sponsors that joined recently are in bold.
Introduction
There have been many experiments with the sizes of computers; some have stayed around and some have gone away. The trend has been to make computers smaller: the early computers had entire buildings devoted to them. Recently, for some classes of computers, they have started becoming as small as could reasonably be desired. For example, phones are thin enough that they can blow away in a strong breeze, smart watches are much the same size as the old-fashioned watches they replace, and NUC-type computers are as small as they need to be given the size of the monitors etc. that they connect to.
This means that further development in the size and shape of computers will largely be determined by human factors.
I think we need to consider how computers might be developed to better suit humans and how to write free software to make such computers usable without being constrained by corporate interests.
Those of us who are involved in developing OSs and applications need to consider how to adjust to the changes and ideally anticipate them. While we can't anticipate the details of future devices, we can easily predict general trends such as devices becoming smaller, higher resolution, etc.
Desktop/Laptop PCs
When home computers first came out, it was standard to have the keyboard in the main box, the Apple ][ being the best-known example. This has lost popularity due to the demand for multiple options for a light keyboard that can be moved for convenience, combined with multiple options for the box part. But it still pops up occasionally, such as the Raspberry Pi 400 [1], which succeeds due to the computer part being small and light. I think this type of computer will remain a niche product. It could be used in an "add a screen to make a laptop" model as opposed to the "add a keyboard to a tablet to make a laptop" model, but a tablet without a keyboard is more useful than a non-server PC without a display.
The PC as a box with connections for keyboard, display, etc. has a long future ahead of it. But the sizes will probably decrease (they should have stopped making PC cases to fit CD/DVD drives at least 10 years ago). The NUC size is a useful option, and I think that DVD drives will stop being used for software soon, which will allow a range of smaller form factors.
The regular laptop will remain useful, but tablets with detachable keyboards could take a lot of that market. Full functionality for all tasks requires a keyboard because, at the moment, text editing with a touch screen is an unsolved problem in computer science [2].
The Lenovo ThinkPad X1 Fold [3] and related Lenovo products are very interesting. Advances in materials allow laptops to be thinner and lighter, which leaves screen size as a major limitation to portability. There is a conflict between desiring a large screen to see lots of content and wanting a small size to carry, and making a device foldable is an obvious solution that has recently become possible. Making a foldable laptop drives a desire for not having a permanently attached keyboard, which then makes a touch screen keyboard a requirement. So this means that user interfaces for PCs have to be adapted to work well on touch screens. The Think line seems to be continuing the history of innovation it had when owned by IBM. There are also a range of other laptops that have two regular screens, so they are essentially the same as the ThinkPad X1 Fold but with two separate screens instead of one folding one; prices are as low as $600US.
I think that the typical interfaces for desktop PCs (e.g. MS-Windows and KDE) don't work well for small devices and touch devices, and the Android interface generally isn't a good match for desktop systems. We need to invent more options for this. This is not a criticism of KDE; I use it every day and it works well. But it's designed for use cases that don't match new hardware that is on sale. As an aside, it would be nice if Lenovo gave samples of their newest gear to people who make significant contributions to GUIs. Give a few ThinkPad Fold devices to KDE people, a few to GNOME people, and a few others to people involved in Wayland development, and see how that promotes software development and future sales.
We also need to adopt features from laptops and phones into desktop PCs. When voice recognition software was first released in the 90s, it was for desktop PCs; it didn't take off largely because it wasn't very accurate (none of them recognised my voice). Now voice recognition in phones is very accurate, and it's very common for desktop PCs to have a webcam or headset with a microphone, so it's time for this to be re-visited. GPS support in laptops is obviously useful and can work via Wifi location, via a USB GPS device, or via wwan mobile phone hardware (even if not used for wwan networking). Another possibility is using the same software interfaces as used for GPS on laptops for a static definition of location for a desktop PC or server.
The Interesting New Things
Watch Like
The wrist-watch [4] has been a standard format for easy access to data on the go since its military use at the end of the 19th century, when the practical benefits beat the supposed femininity of the watch. So it seems most likely that they will continue to be in widespread use in computerised form for the foreseeable future. For comparison, smart phones have been in widespread use as pocket watches for about 10 years.
The question is how watch computers will end up. Will we have Dick Tracy style watch phones that you speak into? Will it remain the current smart watch functionality of using the watch to answer a call which goes to a Bluetooth headset? Will smart watches end up taking over the functionality of the calculator watch [5], which was popular in the 80s? With today's technology you could easily have a fully capable PC strapped to your forearm; would that be useful?
Phone Like
Folding phones (originally popularised as Star Trek Tricorders) seem likely to have a long future ahead of them. Engineering technology has only recently developed to the stage of allowing them to work the way people would hope (a folding screen with no gaps). Phones and tablets with multiple folds are coming out now [6]. This will allow phones to take much of the market share that tablets used to have, while tablets and laptops merge at the high end. I've previously written about convergence between phones and desktop computers [7]; the increased capabilities of phones add to the case for convergence.
Folding phones also provide new possibilities for the OS. The Oppo OnePlus Open and the Google Pixel Fold both have a UI based around using the two halves of the folding screen for separate data at some times. I think that the current user interfaces for desktop PCs don't properly take advantage of multiple monitors, and the possibilities raised by folding phones only add to the lack. My pet peeve with multiple monitor setups is when they don't make it obvious which monitor has keyboard focus, so you send a CTRL-W or ALT-F4 to the wrong screen by mistake; it's a problem that also happens on a single screen but is worse with multiple screens. There are rumours of phones described as "three fold" (where three means the number of segments, with two folds between them); it will be interesting to see how that goes.
Will phones go the same way as PCs in terms of having a separation between the compute part and the input device? It's quite possible to have a compute device in the phone form factor inside a secure pocket which talks via Bluetooth to another device with a display and speakers. Then you could easily switch your phone between a phone-size display and a tablet-size display, and when using your phone a thief would not easily be able to steal the compute part (which has passwords etc). Could the watch part of the phone (strapped to your wrist and difficult to steal) be the active part, with a tablet-size device as an external display? There are already announcements of smart watches with up to 1GB of RAM (the same as the Samsung Galaxy S3); that's enough for a lot of phone functionality.
The Rabbit R1 [8] and the Humane AI Pin [9] have some interesting possibilities for AI speech interfaces. Could they take over some current phone use? It seems that visually impaired people have been doing badly in the trend towards touch screen phones, so a voice interface phone would be a good option for them. As an aside, I hope some people are working on AI stuff for FOSS devices.
Laptop Like
One interesting PC variant I just discovered is the Higole 2 Pro, a portable battery-operated Windows PC with a 5.5" touch screen [10]. It looks too thick to fit in the same pockets as current phones but is still very portable. The version with a built-in battery is $AU423, which is in the usual price range for low-end laptops and tablets. I don't think this is the future of computing, but it is something usable today while we wait for foldable devices to take over.
The recent release of the Apple Vision Pro [11] has driven interest in 3D and head-mounted computers. I think this could be a useful peripheral for a laptop or phone, but it won't be part of a primary computing environment. In 2011 I wrote about the possibility of using augmented reality technology to provide a desktop computing environment [12]. I wonder how a Vision Pro would work for that on a train or a passenger jet.
Another interesting thing on offer is a laptop with a 7" touch screen beside the keyboard [13]. It seems that someone just looked at what parts are available cheaply in China (due to being parts of more popular devices) and what could fit together. I think a keyboard should be centred on the monitor for serious typing, but there may be useful corner cases where typing isn't that common and a touch-screen display is of use. Developing a range of strange hardware and then seeing which ones get adopted is a good thing, and an advantage of AliExpress and Temu.
Useful Hardware for Developing These Things
I recently bought a second-hand ThinkPad X1 Yoga Gen 3 for $359, which has stylus support [14], and it's generally a great little laptop in every other way. There's a common failure mode of that model where touch support for fingers breaks but the stylus still works, which allows it to be used for testing touch-screen functionality while keeping it cheap.
The PineTime is a nice smart watch from Pine64 which is designed to be open [15]. I am quite happy with it but haven't done much with it yet (apart from wearing it every day and getting alerts etc from Android). At $50 delivered to Australia it's significantly more expensive than most smart watches with similar features, but still a lot cheaper than the high-end ones. The Raspberry Pi Watch [16] is interesting too.
The PinePhone Pro is an OK phone made to open standards, but its hardware isn't as good as Android phones released in the same year [17]. I've got some useful stuff done on mine, but the battery life is a major issue and the screen resolution is low. The Librem 5 phone from Purism has a better hardware design for security, with switches to disable functionality [18], but it's even slower than the PinePhone Pro. These are good devices for test and development, but not ones that many people would be excited to use every day.
WWAN hardware (for accessing the phone network) in M.2 form factor can be obtained for free if you have access to old/broken laptops; such devices start at about $35 if you want to buy one. USB GPS devices also start at about $35, so one is probably not worth getting if you can get a WWAN device that does GPS as well.
What We Must Do
Debian appears to have some voice input software in the pocketsphinx package, but no documentation on how it is to be used. This would be a good thing to document; I spent 15 minutes looking at it and couldn't get it going.
To take advantage of the hardware features in phones we need software support, and we ideally don't want free software to lag too far behind proprietary software, which IMHO means the typical Android setup for phones/tablets.
Support for changing screen resolution is already there, as is support for touch screens. Support for adapting the GUI to a changed screen size is something that needs work; even today's common case of connecting a small laptop to an external monitor doesn't have the ideal functionality for changing the UI. There also seem to be some limitations in touch-screen support with multiple screens. I haven't investigated this properly yet; it definitely doesn't work in an expected manner in Ubuntu 22.04, and I haven't yet tested the combinations on Debian/Unstable.
ML is becoming a big thing, and it has some interesting use cases for small devices, where a smart device can compensate for limited input options. There's a lot of work that needs to be done in this area, and we are limited by the fact that we can't just rip off the work of other people for use as training data in the way that corporations do.
Security is more important for devices that are at high risk of theft. The vast majority of free software installations are way behind Android in terms of security, and we need to address that. I have some ideas for improvement, but there is always a conflict between security and usability; while Android is usable for its own special apps, it's not usable in an "I want to run applications that use any files from any other applications in any way I want" sense. My post about sandboxing phone apps is relevant for people who are interested in this [19]. We also need to extend security models to cope with things like "OK Google" type functionality, which has the potential to act as a bug (covert listening device), and the emerging class of LLM-based attacks.
I will write more posts about these things.
Please write comments mentioning FOSS hardware and software projects that address these issues and also documentation for such things.
/usr-move, by Helmut Grohne
Much of the work was spent on handling interaction with the time64 transition
and sending patches for mitigating fallout. The set of packages relevant to
debootstrap is mostly converted and the patches for glibc and base-files
have been refined due to feedback from the upload to Ubuntu noble. Beyond this,
he sent patches for all remaining packages that cannot move their files with
dh-sequence-movetousr and packages using dpkg-divert in ways that dumat
would not recognize.
Upcoming improvements to Salsa CI, by Santiago Ruano Rincón
Last month, Santiago Ruano Rincón started the work on integrating sbuild into
the Salsa CI pipeline. Initially, Santiago used sbuild with the unshare
chroot mode. However, after discussion with josch, jochensp and helmut (thanks
to them!), it turns out that the unshare mode is not the most suitable for the
pipeline, since the level of isolation it provides is not needed, and some test
suites would fail (e.g. krb5). Additionally, one of the requirements of the
build job is the use of ccache, since it is needed by some large C/C++ projects
to reduce the compilation time. In the preliminary work with unshare last
month, it was not possible to make ccache work.
Finally, Santiago changed the chroot mode, and now has a couple of POCs (cf:
1
and 2)
that rely on schroot and sudo, respectively. And the good news is that
ccache is successfully used by sbuild with schroot!
The image here comes from an example of building grep. At the end of the
build, ccache -s shows the statistics of the cache it used: a little more
than half of the calls of that job were cacheable. The most important pieces
are in place to finish the integration of sbuild into the pipeline.
Other than that, Santiago also reviewed the very useful
merge request !346,
made by IOhannes zmölnig to autodetect the release from debian/changelog. As
agreed with IOhannes, Santiago is preparing a merge request to include the
release autodetection use case in Salsa CI's own CI.
Packaging simplemonitor, by Carles Pina i Estany
Carles started using simplemonitor in
2017, opened a
WNPP bug in 2022
and started packaging simplemonitor dependencies in October 2023. After
packaging five direct and indirect dependencies, Carles finally uploaded
simplemonitor to unstable in February.
During the packaging of simplemonitor, Carles reported
a few issues
to upstream. Some of these were to make the simplemonitor package build and run
tests reproducibly. A reproducibility issue was reprotest overriding the
timezone, which broke simplemonitor's tests. There have been discussions on
resolving this upstream in simplemonitor and
in reprotest,
too.
Carles also started upgrading or improving some of simplemonitor s dependencies.
Miscellaneous contributions
Stefano Rivera spent some time doing admin on debian.social infrastructure,
including dealing with a spike of abuse on the Jitsi server.
Stefano started to prepare a new release of dh-python, including cleaning out
a lot of old Python 2.x related code. Thanks to Niels Thykier (outside
Freexian) for spear-heading this work.
DebConf 24 planning is beginning. Stefano discussed venues and finances with
the local team and remotely supported a site-visit by Nattie (outside
Freexian).
Also in the DebConf 24 context, Santiago took part in discussions and
preparations related to the Content Team.
A JIT bug was
reported against pypy3 in Debian Bookworm. Stefano bisected the upstream
history to find the patch (it was already resolved upstream) and released an
update to pypy3 in bookworm.
Enrico participated in /usr-merge discussions with Helmut.
Colin dug into a cluster of celery build failures and tracked the hardest bit
down to a Python 3.12 regression, now
fixed in unstable. celery should be back in testing once the 64-bit time_t
migration is out of the way.
Thorsten Alteholz uploaded a new upstream version of cpdb-libs. Unfortunately
upstream changed the naming of their release tags, so updating the watch file
was a bit demanding. Anyway, this version 2.0 is a huge step towards the
introduction of the new Common Print Dialog Backends.
Helmut sent patches for 48 cross build failures.
Helmut changed debvm to use mkfs.ext4 instead of genext2fs.
Helmut sent a
debci MR
for improving collector robustness.
In preparation for DebConf 25, Santiago worked on the Brest Bid.
To achieve my aims regarding convergence of mobile phone and PC [1] I need something a bit bigger than the 4G of RAM that's in the PinePhone Pro [2]. The PinePhone Pro was released at the end of 2021 but has a SoC that was first released in 2016. That SoC compares well to the ones used in the Pixel and Pixel 2 phones released in the same time period, so it's not a bad SoC, but it doesn't compare well to more recent Android devices, and it also isn't a great fit for the non-Android things I want to do. Also, the PinePhone Pro and Librem 5 have relatively short battery life, so reusing Android functionality for power saving could provide a real benefit. So I want a phone designed for the mass market that I can use for running Debian.
PostmarketOS
One thing I'm definitely not going to do is attempt a full port of Linux to a different platform, or kernel support etc. So I need to choose a device that already has support from a somewhat free Linux system. The PostmarketOS system is the first I considered; the PostmarketOS wiki page of supported devices [3] was the first place I looked. The main supported devices are the PinePhone (not Pro) and the Librem 5, both of which are under-powered. Among the community devices there seems to be nothing that supports calls, SMS, mobile data, and USB-OTG and which also has 4G of RAM or more. If I skip USB-OTG (which presumably means I'd have to get dock functionality via wifi; not impossible but not great) then I'm left with the SHIFT6mq, which was never sold in Australia, and the Xiaomi POCO F1, which doesn't appear to be available on eBay.
LineageOS
The libhybris libraries are a compatibility layer between Android and glibc programs [4], which includes running Wayland with Android display drivers. So running a somewhat standard Linux desktop on top of an Android kernel should be possible. Here is a table of the LineageOS-supported devices that seem to have a useful feature set, are available in Australia, and could be used for running Debian with firmware and drivers copied from Android. I only checked LineageOS as it seems to be the main free Android build.
I just bought a Note 9 with 128G of storage and 6G of RAM for $109 to try out Droidian. It has some screen burn, but that's OK for a test system, and if I end up using it seriously I'll just buy another in as-new condition. With no support for an external display I'll need to set up a software dock to do convergence, but that's not a serious problem. If I end up making a Note 9 with Droidian my daily driver then I'll use the 512G/8G model for that and use the cheap one for testing.
Mobian
I should have checked the Mobian list first, as it's the main Debian variant for phones.
From the Mobian devices list [16], the OnePlus 6T has 8G of RAM or more but isn't available in Australia and costs more than $400 when imported. The PocoPhone F1 doesn't seem to be available on eBay. The Shift6mq is made by a German company with similar aims to the Fairphone [17]; it looks nice but costs €577, which is more than I want to spend, and it isn't on the officially supported list.
Smart Watches
The same issues apply to smart watches. AsteroidOS is a free smart watch OS designed for closed hardware [18]. I don't have time to get involved in this sort of thing, though; I can't hack on every device I use.
FTP master
This month I accepted 242 and rejected 42 packages. The overall number of packages that got accepted was 251.
This was just a short month and the weather outside was not really motivating. I hope it will be better in March.
Debian LTS
This was my one hundred and sixteenth month doing work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.
During my allocated time I uploaded:
[DLA 3739-1] libjwt security update for one CVE to fix a constant-time execution issue
[#1064551] Bookworm PU bug for libjwt; upload after approval
[DLA 3741-1] engrampa security update for one CVE to fix a path traversal issue with CPIO archives
[#1060186] Bookworm PU-bug for libde265 was flagged for acceptance
[#1056935] Bullseye PU-bug for libde265 was flagged for acceptance
I also started to work on qtbase-opensource-src (an update is needed for ELTS, so an LTS update seems to be appropriate as well, especially as there are postponed CVEs).
Debian ELTS
This month was the sixty-seventh ELTS month. During my allocated time I uploaded:
[ELA-1047-1] bind9 security update for one CVE to fix a stack exhaustion issue in Jessie and Stretch
The upload of bind9 was a bit exciting, but all occurring issues with the new upload workflow could be quickly fixed by Helmut, and the packages finally reached their destination. I wonder why it is always me who stumbles upon special cases? This month I also worked on the Jessie and Stretch updates for exim4. I also started to work on an update for qtbase-opensource-src in Stretch (and LTS and other releases as well).
Debian Printing
This month I uploaded new upstream versions of:
This work is generously funded by Freexian!
Debian Matomo
I started a new team, debian-matomo-maintainers. All Matomo-related packages should be handled within this team. PHP PEAR or PECL packages shall still be maintained in their corresponding teams.
This month I uploaded:
Recently, I got a new laptop and had to set it up so I could start using it. But
I wasn't really in the mood to go through the same old steps which I had
explained in this post earlier. I was complaining about
this to my colleague, and there came the suggestion of why not copy the entire
disk to the new laptop. Though it sounded like an interesting idea to me, I had
my doubts, so here is what I told him in return.
I don't have the tools to open my old laptop and connect the new disk over
USB to my new laptop.
I use full disk encryption, and my old laptop has a 512GB disk, whereas the
new laptop has a 1TB NVMe, and I'm not so familiar with resizing LUKS.
He promptly suggested both could be done. For step 1, just expose the disk using
NVME over TCP and connect it over the network and do a full disk copy, and the
rest is pretty simple to achieve. In short, he suggested the following:
Export the disk using nvmet-tcp from the old laptop.
Do a disk copy to the new laptop.
Resize the partition to use the full 1TB.
Resize LUKS.
Finally, resize the BTRFS root disk.
Exporting the Disk over NVMe TCP
The easiest way suggested by my colleague to do this is using
systemd-storagetm.service.
This service can be invoked by simply booting into storage-target-mode.target
by specifying rd.systemd.unit=storage-target-mode.target. But he suggested not
to use this, as I would need to tweak the dracut initrd image to include
network services, and configuring WiFi from this mode is a painful thing to do.
So alternatively, I simply booted both my laptops with the GRML rescue CD. The
following steps were done to export the NVMe disk on my current laptop using the
nvmet-tcp module of Linux:
modprobe nvmet-tcp
cd /sys/kernel/config/nvmet
mkdir ports/0
cd ports/0
echo "ipv4" > addr_adrfam
echo 0.0.0.0 > addr_traddr
echo 4420 > addr_trsvcid
echo tcp > addr_trtype
cd /sys/kernel/config/nvmet/subsystems
mkdir testnqn
echo 1 > testnqn/allow_any_host
mkdir testnqn/namespaces/1
cd testnqn
# replace the device name with the disk you want to export
echo "/dev/nvme0n1" > namespaces/1/device_path
echo 1 > namespaces/1/enable
ln -s "../../subsystems/testnqn" /sys/kernel/config/nvmet/ports/0/subsystems/testnqn
These steps ensure that the device is now exported using NVMe over TCP. The next
step is to detect this on the new laptop and connect the device:
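The exact commands used on the new laptop are not reproduced in the post; assuming the defaults configured above (TCP port 4420, subsystem name testnqn) and a hypothetical address of 192.168.1.10 for the old laptop, the nvme-cli invocations would look something like this sketch:

```shell
# Load the initiator-side module and discover what the old laptop exports.
modprobe nvme-tcp
nvme discover -t tcp -a 192.168.1.10 -s 4420
# Connect to the exported subsystem; a new /dev/nvmeXnY device then appears.
nvme connect -t tcp -n testnqn -a 192.168.1.10 -s 4420
```

The address and the exact device name that appears will differ per setup; `nvme list` (as the post notes next) shows the connected device.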
Finally, nvme list shows the device which is connected to the new laptop,
and we can proceed with the next step, which is to do the disk copy.
Copying the Disk
I simply used the dd command to copy the root disk to my new laptop. Since
the new laptop didn't have an Ethernet port, I had to rely only on WiFi, and it
took about 7 and a half hours to copy the entire 512GB to the new laptop. The
speed at which I was copying was about 18-20MB/s. The other option would have
been to create an initial partition and file system and do an rsync of the root
disk or use BTRFS itself for file system transfer.
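The copy itself boils down to a single dd invocation. As a sketch, here it is demonstrated on throwaway files rather than the real block devices; the device names in the comment are illustrative, not taken from the post:

```shell
# On the real machines this would be something like:
#   dd if=/dev/nvme1n1 of=/dev/nvme0n1 bs=4M status=progress conv=fsync
# where /dev/nvme1n1 is the NVMe-over-TCP device and /dev/nvme0n1 the new disk.
# Demonstrated here on scratch files:
dd if=/dev/urandom of=src.img bs=1M count=4 status=none
dd if=src.img of=dst.img bs=4M status=none conv=fsync
cmp src.img dst.img && echo "copy OK"
```

A larger block size than the default 512 bytes and conv=fsync (flush before exit) are the usual choices for whole-disk copies.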
Resizing Partition and LUKS Container
The final part was very easy. When I launched parted, it detected that the
partition table did not match the disk size and asked if it could fix it, and I
said yes. Next, I had to install cloud-guest-utils to get growpart to
fix the second partition, and the following command extended the partition to
the full 1TB:
growpart /dev/nvme0n1 2
Next, I used cryptsetup resize to increase the LUKS container size.
Finally, I rebooted into the disk, and everything worked fine. After logging
into the system, I resized the BTRFS file system. BTRFS requires the file
system to be mounted for resize, so I could not attempt it from the live boot.
btrfs filesystem resize max /
Conclusion
The only benefit of this entire process is that I have a new laptop, but I still
feel like I'm using my existing laptop. Typically, setting up a new laptop takes
about a week or two to completely get adjusted, but in this case, that entire
time is saved.
An added benefit is that I learned how to export disks using NVME over TCP,
thanks to my colleague. This new knowledge adds to the value of the experience.
Posted on March 10, 2024
Tags: madeof:atoms, craft:cooking
A few notes on what we had for lunch, to be able to repeat it after the
summer.
There were a number of food intolerance related restrictions which meant
that the traditional lasagna recipe wasn't an option; the result still
tasted good, but it was a bit softer and messier to take out of the pan
and into the dishes.
On Saturday afternoon we made fresh no-egg pasta with 200 g (durum)
flour and 100 g water, after about 1 hour it was divided in 6 parts and
rolled to thickness #6 on the pasta machine.
Meanwhile, about 500 ml of low-fat, almost-ragù-like meat sauce was taken
out of the freezer: this was a bit too little; 750 ml would have been
better.
On Saturday evening we made a sauce with 1 l of low-fat milk and 80 g of
flour, and the meat sauce was heated up.
Then everything was put in a 28 cm × 23 cm pan, with 6 layers of pasta and
7 layers of the two sauces, and left to cool down.
And on Sunday morning it was baked for 35 min in the oven at 180 °C.
With 3 people we only had about two thirds of it.
Next time I think we should try to use 400-500 g of flour (so that
it's easier to work by machine), 2 l of milk, 1.5 l of meat sauce, and
divide it into 3 pans: one to eat the next day and two to freeze
(uncooked) for another day.
No pictures, because by the time I thought about writing a post we were
already more than halfway through eating it :)
After 4? 5? or so years of wanting to learn Rust, over the past 4 or
so months I finally bit the bullet and found the motivation to write
some Rust. And the subject.
And I was, and still am, thoroughly surprised. It's like someone took
Haskell, simplified it to some extent, and wrote a systems language
out of it. Writing Rust after Haskell seems easy and pleasant, and you:
don't have to care about unintended laziness which causes memory
leaks (stuck memory, more like).
don't have to care about GC eating too much of your multi-threaded
RTS.
can be happy that there's lots of activity and buzz around the language.
can be happy about generating very small, efficient binaries that feel
right at home on a Raspberry Pi, even ones older than the 5.
are very happy that error handling is done right (Option and Result,
not like Go…).
On the other hand:
there are no actual monads; the ? operator kind of looks like
being in do blocks, but only for Option and Result,
sadly.
there's no Stackage; it's like having
only Hackage available, and you can only hope all packages work together
well.
most packaging is designed to work only against upstream/online
crates.io, so offline packaging is doable but not native (from
what I've seen).
However, overall, one can clearly see there's more movement in Rust,
and the quality of some parts of the toolchain is better (looking at
you, rust-analyzer, compared to HLS).
So, with that, I've just tagged photo-backlog-exporter
v0.1.0. It's
a port of a Python script that was run as a textfile collector, which
meant updates only every ~15 minutes, since it was a bit slow to start.
I then rewrote it in Go (but I don't like Go the language, plus the
GC: if I have to deal with a GC, I'd rather write Haskell), and then
finally rewrote it in Rust.
What does this do? It exports metrics for Prometheus based on the
count, age and distribution of files in a directory. These files
being, for me, the pictures I still have to sort, cull and process,
because I never have enough free time to clear out the backlog. The
script is kind of designed to work together with Corydalis, but since
it doesn't care about file content, it can also double (easily) as a
simple "file count/age exporter".
And to my surprise, writing in Rust is so pleasant that the
feature list is greater than the original Python script's, and,
compared to that untested script, I've rather easily achieved a very
high coverage ratio. Rust has multiple types of tests, and the
combination allows getting pretty far down into the details of testing:
region coverage: >80%
function coverage: >89% (so close here!)
line coverage: >95%
I had to combine a (large) number of testing crates to get it
expressive enough, but it was worth the effort. The last find from
yesterday, assert_cmd, is
excellent for describing tests/assertions in Rust itself, rather than
via a separate, new DSL, as I was using shelltest for in Haskell.
To some extent, I feel like I found the missing arrow in the
quiver. Haskell is good, very good even, for some types of workloads,
but of course not all, and Rust complements it very nicely, with
lots of overlap (as expected). Python can fill in any quick-and-dirty
scripting needed. And I just need to learn more frontend, specifically
TypeScript (the language, not referring to any specific
libraries/frameworks), and I'll be ready for AI to take over coding…
So, for now, I'll need to split my free-time coding between all of the
above and keep exercising my skills. But I'm so glad to have found a
good new language!
Posted on March 9, 2024
Tags: madeof:atoms, craft:sewing, FreeSoftWear
After making my Elastic Neck Top
I knew I wanted to make another one less constrained by the amount of
available fabric.
I had a big cut of white cotton voile, I bought some more swimsuit
elastic, and I also had a spool of nº 100 sewing cotton, but then I
postponed the project for a while, while I was working on other things.
Then FOSDEM 2024 arrived; I was going to attend it remotely, and I was working
on my Augusta Stays, but
I knew that in the middle of FOSDEM I risked getting to the stage where
I needed to leave the computer to try the stays on: not something really
compatible with the frenetic pace of a FOSDEM weekend, even one spent at
home.
I needed a backup project1, and this was perfect: I already
had everything I needed, the pattern and instructions were already on my
site (so I didn't need to take pictures while working), and it was
mostly a lot of straight seams, perfect while watching conference
videos.
So, on the Friday before FOSDEM I cut all of the pieces, then spent
three quarters of FOSDEM on the stays, and when I reached the point
where I needed to stop for a fit test I started on the top.
Like the first one, everything was sewn by hand, and one week after I
had started everything was assembled, except for the casings for the
elastic at the neck and cuffs, which required about 10 km of sewing, and
even if it was just a running stitch it made me want to reconsider my
lifestyle choices a few times: there was really no reason for me not
to do just those seams by machine in a few minutes.
Instead I kept sewing by hand whenever I had time for it, and on the
next weekend it was ready. We had a rare day of sun during the weekend,
so I wore my thermal underwear, some other layer, a scarf around my
neck, and went outside with my SO to have a batch of pictures taken
(those in the jeans posts, and others for a post I haven't written yet.
Have I mentioned I have a backlog?).
And then the top went into the wardrobe, and it will come out again when
the weather is a bit warmer. Or maybe it will be used under the
Augusta Stays, since I don't have a 1700s chemise yet, but that requires
actually finishing them.
The pattern for this project was already online,
of course, but I've added a picture of the casing to the relevant
section, and everything is, as usual, #FreeSoftWear.
yes, I could have worked on some knitting WIP, but lately
I'm more in a sewing mood.
My brain is currently suffering from an overload caused by grading student
assignments.
In search of a somewhat productive way to procrastinate, I thought I
would share a small script I wrote sometime in 2023 to facilitate my grading
work.
I use Moodle for all the classes I teach and students use it to hand me out
their papers. When I'm ready to grade them, I download the ZIP archive Moodle
provides containing all their PDF files and comment them using xournalpp and
my Wacom tablet.
Once this is done, I have a directory structure that looks like this:
Assignment FooBar/
Student A_21100_assignsubmission_file
graded paper.pdf
Student A's perfectly named assignment.pdf
Student A's perfectly named assignment.xopp
Student B_21094_assignsubmission_file
graded paper.pdf
Student B's perfectly named assignment.pdf
Student B's perfectly named assignment.xopp
Student C_21093_assignsubmission_file
graded paper.pdf
Student C's perfectly named assignment.pdf
Student C's perfectly named assignment.xopp
Before I can upload files back to Moodle, this directory needs to be copied (I
have to keep the original files), cleaned of everything but the graded
paper.pdf files and compressed in a ZIP.
You can see how this can quickly get tedious to do by hand. Not being a
complete tool, I often resorted to crafting a few spurious shell one-liners
each time I had to do this1. Eventually I got tired of ctrl-R-ing my
shell history and wrote something reusable.
Behold this script! When I began writing this post, I was certain I had cheaped
out on my 2021 New Year's resolution and written it in Shell, but glory!, it
seems I used a proper scripting language instead.
#!/usr/bin/python3

# Copyright (C) 2023, Louis-Philippe Véronneau <pollo@debian.org>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program.  If not, see <http://www.gnu.org/licenses/>.

"""
This script aims to take a directory containing PDF files exported via the
Moodle mass download function, remove everything but the final files to submit
back to the students and zip it back.

usage: ./moodle-zip.py <target_dir>
"""

import os
import shutil
import sys
import tempfile

from fnmatch import fnmatch


def sanity(directory):
    """Run sanity checks before doing anything else"""
    base_directory = os.path.basename(os.path.normpath(directory))
    if not os.path.isdir(directory):
        sys.exit(f"Target directory {directory} is not a valid directory")
    if os.path.exists(f"/tmp/{base_directory}.zip"):
        sys.exit(f"Final ZIP file path '/tmp/{base_directory}.zip' already exists")
    for root, dirnames, _ in os.walk(directory):
        for dirname in dirnames:
            corrige_present = False
            for file in os.listdir(os.path.join(root, dirname)):
                if fnmatch(file, 'graded paper.pdf'):
                    corrige_present = True
            if corrige_present is False:
                sys.exit(f"Directory {dirname} does not contain a 'graded paper.pdf' file")


def clean(directory):
    """Remove superfluous files, to keep only the graded PDF"""
    with tempfile.TemporaryDirectory() as tmp_dir:
        shutil.copytree(directory, tmp_dir, dirs_exist_ok=True)
        for root, _, filenames in os.walk(tmp_dir):
            for file in filenames:
                if not fnmatch(file, 'graded paper.pdf'):
                    os.remove(os.path.join(root, file))
        compress(tmp_dir, directory)


def compress(directory, target_dir):
    """Compress directory into a ZIP file and save it to the target dir"""
    target_dir = os.path.basename(os.path.normpath(target_dir))
    shutil.make_archive(f"/tmp/{target_dir}", 'zip', directory)
    print(f"Final ZIP file has been saved to '/tmp/{target_dir}.zip'")


def main():
    """Main function"""
    target_dir = sys.argv[1]
    sanity(target_dir)
    clean(target_dir)


if __name__ == "__main__":
    main()
If for some reason you happen to have a similar workflow as I and end up using
this script, hit me up?
Now, back to grading...
If I recall correctly, the lazy way I used to do it involved
copying the directory, renaming the extension of the graded paper.pdf
files, deleting all .pdf and .xopp files using find and changing
graded paper.foobar back to a PDF. Some clever regex or learning awk
from the ground up could've probably done the job as well, but you know,
that would have required using my brain and spending spoons...
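For the curious, the lazy sequence described in the footnote above might have looked something like the following sketch; the directory and file names here are a toy reconstruction built on the spot, not the real assignment files:

```shell
# Build a toy copy of the Moodle directory structure to demonstrate on.
rm -rf demo
mkdir -p "demo/Student A_21100_assignsubmission_file"
touch "demo/Student A_21100_assignsubmission_file/graded paper.pdf" \
      "demo/Student A_21100_assignsubmission_file/assignment.pdf" \
      "demo/Student A_21100_assignsubmission_file/notes.xopp"
# Protect the graded files by changing their extension...
find demo -name 'graded paper.pdf' -execdir mv {} 'graded paper.foobar' \;
# ...delete all remaining .pdf and .xopp files...
find demo -type f \( -name '*.pdf' -o -name '*.xopp' \) -delete
# ...and rename the graded files back to PDFs.
find demo -name 'graded paper.foobar' -execdir mv {} 'graded paper.pdf' \;
ls "demo/Student A_21100_assignsubmission_file"
```

Only "graded paper.pdf" survives, which is exactly what the script above automates (plus the zipping).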
Posted on March 8, 2024
Tags: madeof:atoms, craft:sewing, FreeSoftWear
When I had finished sewing my jeans, I had a scant 50 cm of elastic denim
left.
Unrelated to that, I had just finished drafting a vest with Valentina,
after the Cutters' Practical Guide to the Cutting of Ladies' Garments.
A new pattern requires a (wearable) mockup. 50 cm of leftover fabric
require a quick project. The decision didn't take a lot of time.
As a mockup, I kept things easy: single layer with no lining, some edges
finished with a topstitched hem and some with bias tape, and plain tape
on the fronts, to give more support to the buttons and buttonholes.
I did add pockets: not real welt ones (too much effort on denim), but
simple slits covered by flaps.
[Image caption, truncated: "… piece; there is a slit in the middle that
has been finished with topstitching."]
To make them I marked the slits, then cut two rectangles of pocketing
fabric that should have been as wide as the slit + 1.5 cm (width of the
pocket) + 3 cm (allowances) and twice the sum of how tall I wanted the
pocket to be plus 1 cm (space above the slit) + 1.5 cm (allowances).
Then I put the rectangle on the right side of the denim, aligned so that
the top edge was 2.5 cm above the slit, sewed 2 mm from the slit, cut,
turned the pocketing to the wrong side, pressed and topstitched 2 mm
from the fold to finish the slit.
[Image caption, truncated: "… other sides; it does not lay flat on the
right side of the fabric because the finished slit (hidden in the
picture) is pulling it."]
Then I turned the pocketing back to the right side, folded it in half,
sewed the side and top seams with a small allowance, pressed and turned
it again to the wrong side, where I sewed the seams again to make a
french seam.
And finally, a simple rectangular denim flap was topstitched to the
front, covering the slits.
I wasn't as precise as I should have been, and the pockets aren't exactly
the right size, but they will do to see if I got the positions right (I
think that the breast one should be a cm or so lower; the waist ones are
fine), and of course they are tiny, but that's to be expected from a
waistcoat.
The other thing that wasn't exactly as expected is the back: the pattern
splits the bottom part of the back to give it sufficient "spring over
the hips". The book was probably published in 1892, but I had already
found when drafting the foundation skirt that its idea of "hips"
includes a bit of structure: the "enough steel to carry a book or a cup
of tea" kind of structure. I should have expected a lot of spring, and
indeed that's what I got.
To fit the bottom part of the back on the limited amount of fabric I had
to piece it, and I suspect that the flat felled seam in the center is
helping it stick out; I don't think it's exactly bad, but it is
a peculiar look.
Also, I had to cut the back on the fold, rather than having a seam in
the middle with the grain at a different angle.
Anyway, my next waistcoat project is going to have a linen-cotton lining
and silk fashion fabric, and I'd say that the pattern is good enough
that I can make a few small fixes and cut it directly in the lining,
using it as a second mockup.
As for the wrinkles, there are quite a few, but they look like something
that will be solved by a bit of lightweight boning in the side seams and
in the front; we'll see in the second mockup and the finished
waistcoat.
As for this one, it's definitely going to get some wear as is, in casual
contexts. Except. Well, it's a denim waistcoat, right? With a very
different cut from the "get a denim jacket and rip out the sleeves"
kind, but still a denim waistcoat, right? The kind that you cover in
patches, right?
And I may have screenprinted a "home sewing is killing fashion" patch
some time ago, using the SVG from the Wikimedia Commons "Home
Taping is Killing Music" page.
And. Maybe I'll wait until I have finished the real waistcoat. But I
suspect that one, and other sewing / costuming patches, may happen in
the future.
No regrets, as the words on my seam ripper pin say, right? :D
Many of us grew up used to having some news sources we could implicitly trust, such as well-positioned newspapers and radio or TV news programs. We knew they would only hire responsible journalists rather than risk diluting public trust and losing their brand's value. However, with the advent of the Internet and social media, we are witnessing what has been termed the post-truth phenomenon. The undeniable freedom that horizontal communication has given us automatically brings with it the emergence of filter bubbles and echo chambers, and truth seems to become a group belief.
Contrary to my original expectations, the core topic of the book is not about how current-day media brings about post-truth mindsets. Instead it goes into a much deeper philosophical debate: What is truth? Does truth exist by itself, objectively, or is it a social construct? If activists with different political leanings debate a given subject, is it even possible for them to understand the same points for debate, or do they truly experience parallel realities?
The author wrote this book clearly prompted by the unprecedented events that took place in 2020, as the COVID-19 crisis forced humanity into isolation and online communication. Donald Trump is explicitly and repeatedly presented throughout the book as an example of an actor that took advantage of the distortions caused by post-truth.
The first chapter frames the narrative from the perspective of information flow over the last several decades, on how the emergence of horizontal, uncensored communication free of editorial oversight started empowering the netizens and created a temporary information flow utopia. But soon afterwards, algorithmic gatekeepers started appearing, creating a set of personalized distortions on reality; users started getting news aligned to what they already showed interest in. This led to an increase in polarization and the growth of narrative-framing-specific communities that served as echo chambers for disjoint views on reality. That, in turn, led to the growth of conspiracy theories and, inevitably, to the science denial and pseudoscience that reached unimaginable peaks during the COVID-19 crisis. Finally, when readers decide based on completely subjective criteria whether a scientific theory such as global warming is true or propaganda, or question what most traditional news outlets present as facts, we face the phenomenon known as fake news. Fake news leads to post-truth, a state where it is impossible to distinguish between truth and falsehood, where truth serves only a rhetorical function, making rational discourse impossible.
Toward the end of the first chapter, the tone of writing turns away from describing developments in the spread of news and facts over the last decades and goes deep into philosophy, into the very thorny subject pursued by said discipline for millennia: How can truth be defined? Can different perspectives bring about different truth values for any given idea? Does truth depend on the observer, on their knowledge of facts, on their moral compass, or on their honest opinions?
Zoglauer dives into epistemology, following various thinkers' ideas on what can be understood as truth: constructivism (whether knowledge and truth values can be learnt by an individual building from their personal experience), objectivity (whether experiences, and thus truth, are universal, or whether they are naturally individual), and whether we can proclaim something to be true when it corresponds to reality. For the final chapter, he dives into the role information and knowledge play in assigning and understanding truth value, as well as the value of second-hand knowledge: Do we really own knowledge because we can look up facts online (even if we carefully check the sources)? Can I, without any medical training, diagnose a sickness and treatment by honestly and carefully looking up its symptoms in medical databases?
Wrapping up, while I very much enjoyed reading this book, I must confess it is completely different from what I expected. This book digs much more into the abstract than into information flow in modern society, or the impact on early 2020s politics as its editorial description suggests. At 160 pages, the book is not a heavy read, and Zoglauer's writing style is easy to follow, even across the potentially very deep topics it presents. Its main readership is not necessarily computing practitioners or academics. However, for people trying to better understand epistemology through its expressions in the modern world, it will be a very worthy read.